Search for: All records

Creators/Authors contains: "Sharma, Atul"

  1. Free, publicly-accessible full text available September 1, 2024
  2. Federated learning—multi-party, distributed learning in a decentralized environment—is more vulnerable to model poisoning attacks than centralized learning, because malicious clients can collude and send carefully tailored model updates to make the global model inaccurate. This motivated the development of Byzantine-resilient federated learning algorithms, such as Krum, Bulyan, FABA, and FoolsGold. However, a recently developed untargeted model poisoning attack showed that all prior defenses can be bypassed. The attack uses the intuition that simply by changing the sign of the gradient updates that the optimizer is computing, for a set of malicious clients, a model can be diverted from the optima to increase the test error rate. In this work, we develop FLAIR—a defense against this directed deviation attack (DDA), a state-of-the-art model poisoning attack. FLAIR is based on our intuition that in federated learning, certain patterns of gradient flips are indicative of an attack. This intuition is remarkably stable across different learning algorithms, models, and datasets. FLAIR assigns reputation scores to the participating clients based on their behavior during the training phase and then takes a weighted contribution of the clients. We show that where the existing defense baselines of FABA [IJCAI’19], FoolsGold [Usenix ’20], and FLTrust [NDSS ’21] fail when 20-30% of the clients are malicious, FLAIR provides Byzantine robustness up to a malicious client percentage of 45%. We also show that FLAIR provides robustness against even a white-box version of DDA.
    Free, publicly-accessible full text available July 10, 2024
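The FLAIR abstract above describes two ingredients: detecting suspicious gradient sign-flip patterns, and aggregating client updates weighted by accumulated reputation scores. The paper's exact scoring rule is not given in the abstract, so the following is a hypothetical minimal sketch of that idea (the function name, decay constant, and softmax weighting are all assumptions, not the authors' method):

```python
import numpy as np

def reputation_weighted_aggregate(updates, prev_global_update, reputations,
                                  decay=0.9):
    """Hypothetical sketch of sign-flip-based reputation aggregation.

    updates            : list of per-client update vectors (np.ndarray)
    prev_global_update : previous round's aggregated update
    reputations        : running per-client reputation scores
    """
    # Fraction of coordinates where each client's update flips sign
    # relative to the previous global update.
    flip_frac = np.array([
        np.mean(np.sign(u) != np.sign(prev_global_update)) for u in updates
    ])
    # Clients flipping more than the median are treated as suspicious.
    suspicion = flip_frac - np.median(flip_frac)
    # Exponentially decayed reputation update.
    reputations = decay * reputations - (1 - decay) * suspicion
    # Softmax of reputations gives the aggregation weights.
    weights = np.exp(reputations)
    weights /= weights.sum()
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return aggregated, reputations
```

With three honest clients whose updates align with the previous global direction and one client that flips every sign, the flipped client's reputation (and hence its aggregation weight) drops below the honest clients'.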
  3. Standard ML relies on training using a centrally collected dataset, while collaborative learning techniques such as Federated Learning (FL) enable data to remain decentralized at client locations. In FL, a central server coordinates the training process, reducing computation and communication expenses for clients. However, this centralization can lead to server congestion and heightened risk of malicious activity or data privacy breaches. In contrast, Peer-to-Peer Learning (P2PL) is a fully decentralized system where nodes manage both local training and aggregation tasks. While P2PL promotes privacy by eliminating the need to trust a single node, it also results in increased computation and communication costs, along with potential difficulties in achieving consensus among nodes. To address the limitations of both FL and P2PL, we propose a hybrid approach called Hubs-and-Spokes Learning (HSL). In HSL, hubs function similarly to FL servers, maintaining consensus but exerting less control over spokes. This paper argues that HSL’s design allows for greater availability and privacy than FL, while reducing computation and communication costs compared to P2PL. Additionally, HSL maintains consensus and integrity in the learning process. 
    Free, publicly-accessible full text available June 1, 2024
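The HSL abstract above describes a two-tier topology: hubs aggregate their spokes' models (as an FL server would), then reach consensus among themselves (as P2PL peers would). The abstract does not specify the aggregation or consensus rules, so this is a minimal sketch under assumed choices (plain averaging at hubs, fully connected gossip averaging between hubs):

```python
import numpy as np

def hsl_round(spoke_models, hub_assignment, num_hubs, mix_steps=1):
    """One hypothetical Hubs-and-Spokes Learning round.

    spoke_models   : list of model vectors, one per spoke
    hub_assignment : hub index for each spoke
    num_hubs       : number of hubs
    """
    # Step 1: each hub averages the models of its own spokes (FL-style).
    hub_models = []
    for h in range(num_hubs):
        members = [m for m, a in zip(spoke_models, hub_assignment) if a == h]
        hub_models.append(np.mean(members, axis=0))
    # Step 2: hubs mix among themselves to maintain consensus (P2PL-style).
    # Assumes a fully connected hub graph, so one step yields the global mean.
    for _ in range(mix_steps):
        mixed = np.mean(hub_models, axis=0)
        hub_models = [mixed.copy() for _ in range(num_hubs)]
    # Step 3: each spoke receives its hub's post-mixing model.
    return [hub_models[a] for a in hub_assignment]
```

The design point the abstract argues for is visible here: spokes only ever talk to one hub (low client cost), while the heavier all-to-all communication is confined to the small set of hubs.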
  4. This study highlights innovative, minimally-invasive glucose sensing sutures for monitoring glucose levels in house sparrows.

  5. Multi-collinear factorization limits provide a window to study how locality and unitarity of scattering amplitudes can emerge dynamically from celestial CFT, the conjectured holographic dual to gauge and gravitational theories in flat space. To this end, we first use asymptotic symmetries to commence a systematic study of conformal and Kac-Moody descendants in the OPE of celestial gluons. Recursive application of these OPEs then equips us with a novel holographic method of computing the multi-collinear limits of gluon amplitudes. We perform this computation for some of the simplest helicity assignments of the collinear particles. The prediction from the OPE matches with Mellin transforms of the expressions in the literature to all orders in conformal descendants. In a similar vein, we conclude by studying multi-collinear limits of graviton amplitudes in the leading approximation of sequential double-collinear limits, again finding a consistency check against the leading order OPE of celestial gravitons.